Decoding language from brain activity is a long-awaited goal in both healthcare and neuroscience. Thanks to intracranial devices, major milestones have recently been reached: subject-specific pipelines trained on invasive brain responses to basic language tasks now start to efficiently decode interpretable features (e.g. letters, words, spectrograms). However, scaling this approach to natural speech and non-invasive brain recordings remains a major challenge. Here, we propose a single end-to-end architecture trained with contrastive learning across a large cohort of individuals to predict self-supervised representations of natural speech. We evaluate our model on four public datasets, encompassing 169 volunteers recorded with magneto- or electro-encephalography (M/EEG) while they listened to natural speech. The results show that our model can identify, from 3 s of MEG signals, the corresponding speech segment with up to 72.5% top-10 accuracy (and 44% top-1 accuracy) out of 1,594 distinct segments, and up to 19.1% out of 2,604 segments for EEG recordings, thus allowing the decoding of phrases absent from the training set. Model comparison and ablation analyses show that these performances directly benefit from our original design choices, namely (i) a contrastive objective, (ii) pretrained representations of speech, and (iii) a common convolutional architecture trained simultaneously across several participants. Together, these results delineate a promising path toward decoding natural language processing in real time from non-invasive recordings of brain activity.
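To make the contrastive objective concrete, here is a minimal sketch of a CLIP-style loss that aligns brain-signal embeddings with pretrained speech-representation embeddings. All names (`brain_embeds`, `speech_embeds`, the temperature value) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CLIP-style contrastive objective aligning brain-signal
# embeddings with pretrained speech representations. Tensor names and the
# temperature are illustrative placeholders, not the paper's exact setup.
import torch
import torch.nn.functional as F

def contrastive_loss(brain_embeds, speech_embeds, temperature=0.07):
    """brain_embeds, speech_embeds: (batch, dim) embeddings of matching 3 s windows."""
    brain = F.normalize(brain_embeds, dim=-1)
    speech = F.normalize(speech_embeds, dim=-1)
    logits = brain @ speech.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    # Each brain window should be most similar to its own speech segment.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```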
For applications that require processing large amounts of text at inference time, Large Language Models (LLMs) are handicapped by their limited context windows, which are typically 2048 tokens. In-context learning, an emergent phenomenon in LLMs of sizes above a certain parameter threshold, constitutes one significant example because it can only leverage training examples that fit into the context window. Existing efforts to address the context window limitation involve training specialized architectures, which tend to be smaller than the sizes in which in-context learning manifests due to the memory footprint of processing long texts. We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training. The key to the approach is to carve a long context into chunks (``windows'') that fit within the architecture, restrict the attention mechanism to apply only within each window, and re-use the positional embeddings among the windows. We test the PCW approach on in-context learning with models that range in size between 750 million and 178 billion parameters, and show substantial improvements for tasks with diverse input and output spaces. Our results motivate further investigation of Parallel Context Windows as a method for applying off-the-shelf LLMs in other settings that require long text sequences.
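A rough sketch of the bookkeeping behind this idea: split the context tokens into windows, reuse position ids across windows, and build an attention mask so context tokens attend only within their own window while the task tokens attend to everything. This illustrates the mechanism under stated assumptions and is not the authors' implementation.

```python
# Sketch of Parallel Context Windows bookkeeping: per-window causal attention,
# shared position ids across windows, and task tokens attending to all context.
import torch

def pcw_masks(window_lens, task_len):
    n_ctx = sum(window_lens)
    total = n_ctx + task_len
    attn = torch.zeros(total, total, dtype=torch.bool)
    pos_ids = torch.empty(total, dtype=torch.long)
    start = 0
    for w in window_lens:
        # causal attention restricted to this window
        attn[start:start + w, start:start + w] = torch.tril(torch.ones(w, w)).bool()
        pos_ids[start:start + w] = torch.arange(w)   # positions re-used across windows
        start += w
    # task tokens attend causally to everything and continue the position sequence
    attn[n_ctx:, :n_ctx] = True
    attn[n_ctx:, n_ctx:] = torch.tril(torch.ones(task_len, task_len)).bool()
    pos_ids[n_ctx:] = max(window_lens) + torch.arange(task_len)
    return attn, pos_ids
```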
Dual encoders are now the dominant architecture for dense retrieval. Yet, we have little understanding of how they represent text, and why this leads to good performance. In this work, we shed light on this question via distributions over the vocabulary. We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space. We show that the resulting distributions over vocabulary tokens are intuitive and contain rich semantic information. We find that this view can explain some of the failure cases of dense retrievers. For example, the inability of models to handle tail entities can be explained via a tendency of the token distributions to forget some of the tokens of those entities. We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, and show that this significantly improves performance compared to the original model in out-of-domain settings.
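A minimal sketch of the vocabulary-projection view: push a dense query or passage vector through the encoder's (frozen) MLM head and read off the top tokens of the resulting distribution. `mlm_head` and `tokenizer` are placeholders for whatever dual-encoder backbone is in use, not a specific API from the paper.

```python
# Minimal sketch: interpret a dense query/passage vector as a distribution over the
# vocabulary by projecting it through the frozen MLM head of the underlying encoder.
import torch

def vocab_projection(dense_vec, mlm_head, tokenizer, top_k=10):
    """dense_vec: (dim,) representation produced by the dual encoder."""
    with torch.no_grad():
        logits = mlm_head(dense_vec.unsqueeze(0)).squeeze(0)   # (vocab_size,)
        probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, top_k)
    return [(tokenizer.convert_ids_to_tokens(i.item()), p.item())
            for i, p in zip(top.indices, top.values)]
```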
Causal transformer language models (LMs), such as GPT-3, typically require some form of positional encoding, such as positional embeddings. However, we show that LMs without any explicit positional encoding are still competitive with standard models, and that this phenomenon is robust across different datasets, model sizes, and sequence lengths. Probing experiments reveal that such models acquire an implicit notion of absolute positions throughout the network, effectively compensating for the missing information. We conjecture that causal attention enables the model to infer the number of predecessors that each token can attend to, thereby approximating its absolute position. Our findings indicate that causal LMs might derive positional awareness not only from the explicit positioning mechanism, but also from the effects of the causal mask.
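A hedged sketch of the kind of probing this refers to: fit a linear regressor that predicts each token's absolute position from a hidden state of a causal LM trained without positional encodings. The data collection and probe choice here are assumptions, not the authors' exact protocol.

```python
# Hedged sketch of a position probe: predict each token's absolute position from
# hidden states of a no-positional-encoding causal LM with a linear model.
# `hidden_states` is assumed to be a (num_tokens, dim) array from some layer.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def probe_positions(hidden_states, positions):
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, positions, test_size=0.2, random_state=0)
    probe = Ridge(alpha=1.0).fit(X_train, y_train)
    # High held-out R^2 indicates position information is linearly recoverable.
    return probe.score(X_test, y_test)
```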
NLP benchmarks have largely focused on short texts, such as sentences and paragraphs, even though long texts comprise a considerable amount of natural language in the wild. We introduce SCROLLS, a suite of tasks that require reasoning over long texts. We examine existing long-text datasets and hand-pick ones where the text is naturally long, while prioritizing tasks that involve synthesizing information across the input. SCROLLS contains summarization, question answering, and natural language inference tasks, covering multiple domains, including literature, science, business, and entertainment. Initial baselines, including Longformer Encoder-Decoder, indicate that there is ample room for improvement on SCROLLS. We make all datasets available in a unified text-to-text format and host a live leaderboard to facilitate research on model architecture and pretraining methods.
Text clustering methods have traditionally been incorporated into multi-document summarization (MDS) as a means of coping with the considerable repetition of information; clusters were leveraged to indicate information saliency and to avoid redundancy. These methods focused on clustering sentences, even though closely related sentences usually also contain non-aligning information. In this work, we revisit the clustering approach and group propositions instead, for more precise information alignment. Specifically, our method detects salient propositions, clusters them into paraphrastic clusters, and generates a representative sentence for each cluster by fusing its propositions. Our summarization method outperforms the previous state-of-the-art MDS methods on the DUC 2004 and TAC 2011 datasets, both in automatic ROUGE scores and in human preference.
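As a stand-in sketch of the paraphrase-clustering step, one could embed the extracted propositions and group them with agglomerative clustering over cosine distance; each cluster then feeds a sentence-fusion step. The paper's actual proposition detection, clustering, and fusion components are more involved, and `embed` is a placeholder for any sentence-embedding function.

```python
# Stand-in sketch of grouping propositions into paraphrastic clusters.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_propositions(propositions, embed, distance_threshold=0.4):
    """propositions: list of proposition strings; embed: callable -> (n, dim) array."""
    vectors = embed(propositions)
    vectors = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    clustering = AgglomerativeClustering(
        n_clusters=None, metric="cosine", linkage="average",
        distance_threshold=distance_threshold).fit(vectors)
    clusters = {}
    for prop, label in zip(propositions, clustering.labels_):
        clusters.setdefault(label, []).append(prop)
    return list(clusters.values())   # each cluster is fused into one representative sentence
```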
Dense retrievers for open-domain question answering (ODQA) have been shown to achieve impressive performance by training on large datasets of question-passage pairs. We investigate whether dense retrieval can be learned in a self-supervised fashion and effectively applied without any annotations. We observe that existing pretrained models for retrieval struggle in this setting, and propose a new pretraining scheme designed for retrieval: recurring span retrieval. We use spans that recur across passages of a document to create pseudo examples for contrastive learning. The resulting model, Spider, performs well without any examples on a wide range of ODQA datasets and is competitive with BM25, a strong sparse baseline. Moreover, Spider often outperforms strong DPR baselines trained on questions from other datasets. A hybrid retriever that combines Spider with BM25 improves over both of its components on all datasets, and is often competitive with in-domain DPR models trained on tens of thousands of examples.
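One plausible way to build such pseudo examples, assuming that a span occurring in two passages of the same document turns those passages into a query-positive pair for contrastive learning; the paper's exact construction may differ, and this only illustrates the idea.

```python
# Hedged sketch of building pseudo query-positive pairs from recurring spans.
from collections import defaultdict
from itertools import combinations

def recurring_span_pairs(passages, candidate_spans):
    """passages: list of passage strings from one document; candidate_spans: n-grams to check."""
    occurrences = defaultdict(list)
    for idx, passage in enumerate(passages):
        for span in candidate_spans:
            if span in passage:
                occurrences[span].append(idx)
    pairs = []
    for span, idxs in occurrences.items():
        for i, j in combinations(idxs, 2):
            # one passage acts as a pseudo query, the other as its positive
            pairs.append({"span": span, "query": passages[i], "positive": passages[j]})
    return pairs
```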
Automatic brain tumor segmentation from magnetic resonance imaging (MRI) data plays an important role in assessing tumor response to therapy and in personalized treatment stratification. Manual segmentation is tedious and subjective; automated brain tumor segmentation algorithms have the potential to provide objective and fast segmentation. However, training such algorithms requires large datasets, which are not always available. Data augmentation techniques can reduce the need for large datasets, but current approaches are mostly parametric and may lead to suboptimal performance. We introduce two non-parametric data augmentation methods for brain tumor segmentation: mixed structure regularization (MSR) and shuffle pixels noise (SPN). We evaluated the added value of the MSR and SPN augmentations on the Brain Tumor Segmentation (BraTS) 2018 challenge dataset, using the encoder-decoder nnU-Net architecture as the segmentation algorithm. Both MSR and SPN improve nnU-Net segmentation accuracy compared with parametric Gaussian noise augmentation. Accuracy increased from 80% to 82% when adding MSR to the non-parametric augmentations, with p-values of 0.0022 and 0.0028 for the tumor core and whole tumor experiments, respectively. The proposed MSR and SPN augmentations have the potential to improve neural network performance in other tasks as well.
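A hedged sketch of what a "shuffle pixels noise"-style augmentation could look like: randomly permute the intensities of a small fraction of voxels in an MRI volume. The paper's exact SPN (and MSR) formulations may differ; the fraction and sampling scheme here are assumptions.

```python
# Hedged sketch of a shuffle-pixels-noise-style augmentation for MRI volumes.
import numpy as np

def shuffle_pixels_noise(volume, fraction=0.05, rng=None):
    """volume: 3D numpy array of MRI intensities; fraction: share of voxels to shuffle."""
    rng = rng or np.random.default_rng()
    out = volume.copy()
    flat = out.reshape(-1)
    n = int(fraction * flat.size)
    idx = rng.choice(flat.size, size=n, replace=False)
    flat[idx] = flat[rng.permutation(idx)]    # permute the selected intensities among themselves
    return out
```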
Keyphrase extraction has been extensively researched in the single-document setting, with an abundance of methods, datasets, and applications. In contrast, multi-document keyphrase extraction has rarely been studied, despite its utility for describing sets of documents and its use in summarization. Moreover, no prior dataset exists for multi-document keyphrase extraction, hindering progress on the task. Recent advances in multi-text processing make the task an even more appealing challenge to pursue. To stimulate this pursuit, we present here the first dataset for the task, MK-DUC-01, which can serve as a new benchmark, and test multiple keyphrase extraction baselines on our data. In addition, we provide a brief but comprehensive literature review of the task.
Fine-tuned language models use greedy decoding to answer reading comprehension questions with relative success. However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one. Does greedy decoding actually perform worse than an algorithm that does adhere to these properties? To study the performance and optimality of greedy decoding, we present exact-extract, a decoding algorithm that efficiently finds the most probable answer span in the context. We compare the performance of T5 with both decoding algorithms on zero-shot and few-shot extractive question answering. When no training examples are available, exact-extract significantly outperforms greedy decoding. However, greedy decoding quickly converges towards the performance of exact-extract with the introduction of a few training examples, becoming more extractive and increasingly likely to generate the most probable span as the training set grows. We also show that self-supervised training can bias the model towards extractive behavior, increasing performance in the zero-shot setting without resorting to annotated examples. Overall, our results suggest that pretrained language models are so good at adapting to extractive question answering that it is often enough to fine-tune on a small training set for the greedy algorithm to emulate the optimal decoding strategy.
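A naive sketch of what exact-extract computes: among all contiguous spans of the passage (up to a length cap), return the one the model assigns the highest generation probability. The paper's algorithm does this efficiently; `span_log_prob` is a placeholder scoring function (e.g., summing T5 decoder token log-probabilities), and the brute-force enumeration below is only illustrative.

```python
# Naive illustration of the exact-extract objective: find the most probable span.
def most_probable_span(passage_tokens, span_log_prob, max_len=30):
    best_score, best_span = float("-inf"), None
    for start in range(len(passage_tokens)):
        for end in range(start + 1, min(start + max_len, len(passage_tokens)) + 1):
            span = passage_tokens[start:end]
            score = span_log_prob(span)          # log P(span | question, passage)
            if score > best_score:
                best_score, best_span = score, span
    return best_span
```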